• 0 Posts
  • 89 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Might I add the idea that your terminal emulator must support your shell is utterly ridiculous?

    TBH I am starting to come around to the idea of a tightly integrated shell and terminal emulator. There are just things you cannot do when these are separate programs. I am very tempted to explore the idea from the other end though - writing a shell that has an emulator built into it (which is basically what screen/tmux are). But I do think this integration is needed for any per-command feature that is not just printing out a prompt.

    It would be interesting to see what could be done with this type of integration, though it will likely break support for existing shells. Unless you maybe launch a shell for each command you run or something 🤔. I would like to see more people experimenting with stuff like this, to see what new things we could drive forward. We have been stuck with the current tty system since like the 80s, to support devices that just don't exist anymore.


  • I think the issue fundamentally is that this isn’t what terminal emulators are. The terminal emulator initializes a TTY session and enters a shell environment (sh, zsh, fish, etc). The medium is text and cannot be anything else.

    This is 90s thinking. Why must terminal emulators only be text and only do things that a physical terminal could? What makes terminals so nice is not that they work on 90s technology. Some terminal emulators can already display images, which is great. And the ideas they are introducing are still fundamentally text based, but are geared towards structuring that text a bit more than a constant stream of characters on the screen.

    Skill issue. Pipe your output to something (like a file or the “less” command)

    This is a convenience issue, not a skill issue. Yes, you can pipe output to things, but you need to know beforehand that you want to do that. With less you lose the output once you close it, and with files you have to clean them up after the fact. Both are inconvenient and need to be thought of before you run a command. IMO it would be nice to just run a command and then be able to collapse, expand or search its output after the fact, without having to preempt everything beforehand.

    The argument that you can already do that in a much less convenient way is not a very good argument.


  • Konsole can display images, as can kitty, alacritty, wezterm, iterm2, etc.

    They can now? I knew it was possible in some niche terminals, but I never knew it was as widespread as that.

    but it’s not exactly game changing

    None of these features on their own are game changing, I agree. But lots of small nice-to-haves can end up being game changing overall. Again - I don't think these particular terminals offer anywhere near enough to warrant their IMO massive downsides. But I would love to see more innovation in the terminal emulator space.

    Lastly, searching explicitly your last command for a term with context would be much better suited to the shell to solve as it’d be terminal independent.

    I had a similar thought TBH. But the more I thought about it, the more I came to see that in order to do this nicely - i.e. with inline scrollback, or being able to collapse command output like these terminals do - you would basically need to build a terminal emulator into the shell. Either way you are breaking down the wall between what a shell and a terminal emulator do. I would be interested in exploring this from the shell side, though I cannot fault them for doing it from the emulator side either.

    couldn’t be solved at the shells level or with supplementary applications

    I think the key benefit here is integration rather than the technical ability to do something. Making things easy and convenient goes a long way. A lot can be made much nicer with things tightly integrated rather than stringing a bunch of disparate applications together - even if you can do it, the integrated approach will give you a much more refined experience.

    I doubt they’re outright rejecting any idea of progress

    It sounds like an outright dismissal of new features to me.


  • nous@programming.dev to Linux@lemmy.ml · New terminal apps: Warp and Wave · English · +5/-8 · 7 hours ago

    That is the Luddite argument against progressing anything. There are many problems with current terminal emulators that these newer ones are trying to fix, to make the terminal experience better overall. Terminals as they currently work were designed the way they are to talk to dumb typewriters with a screen (that's right - not keyboards, digital typewriters). And they have barely changed at all since then.

    Personally, looking at these terminals, they have a lot of niceties that I would love to use. But IMO those benefits are not worth the costs these particular terminals come with. One is closed source and requires an account, and the other is Electron - no benefit is worth that. But to bury your head in the sand and claim they have no benefits at all is wrong.

    Being able to view images in the terminal would be amazing on its own - just like you can cat a text file. I would hate to need to launch a GUI program every time I wanted to see what was inside a text file, but that is exactly what I have to do for images or PDFs.

    Being able to collapse the output of a command would be nice as well. It would save me quite a bit of annoyance, given the number of times I have had to scroll for days to get to the output of a previous command because I happened to run a noisy one but still wanted to check what something earlier had done. And being able to search just the last command's output would be great - like an after-the-fact, interactive grep with context. And being able to quickly copy the output of the last command would also be great.


  • Just factor it into your estimates and make it a requirement of the work. Don't talk to managers as though it is some optional bit of work that can be done in isolation. If you do frequent refactoring before you start a feature, it does not add a load of time, as it saves a bunch of time when adding the feature. And it helps keep your code base cleaner over the longer term, leading to fewer occasions where you need to do larger refactors.


  • Big refactorings are a bad idea.

    IMO what is worse than big refactors is when people refactor and change behavior at the same time. A refactor means you change the code without changing its behavior. When you mix the two up, it becomes very hard to tell if a behavioral change is intended or a new bug. When you keep the behavior the same, it is easier to spot an accidental change in behavior in what should otherwise be a no-op change.

    And if you can, try to keep your refactors to one type of change. I have done and seen refactors of many hundreds or even thousands of lines - kept manageable because it was the same basic pattern changing across a whole code base. For instance, say you want to change logging libraries, or introduce one to replace simple print statements. It is best to add the new library first, maybe with a few example uses, in one PR. Then do a bulk edit of all the instances (maybe per module or section of code for very large code bases) that simply switches call sites over to the new library.
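
    As a rough Go sketch of what I mean by "the same basic pattern" (assuming a move from plain fmt prints to the standard log/slog package - the names and messages are made up for illustration):

    ```go
    package main

    import (
        "fmt"
        "log/slog"
    )

    func main() {
        user, addr := "alice", "10.0.0.5"

        // Before: an ad-hoc print statement, one of hundreds like it.
        fmt.Printf("user %s logged in from %s\n", user, addr)

        // After: the same call site switched to the structured logger
        // introduced in the first PR. Every site changes the same way,
        // which is what keeps a 1000-line diff reviewable.
        slog.Info("user logged in", "user", user, "addr", addr)
    }
    ```

    Because every call site changes identically, reviewers can skim the bulk PR quickly, and anything that does not fit the pattern stands out.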

    If you don’t know what an API should look like, write the tests first as it’ll force you to think of the “customer” which in this case is you.

    I think there is some nuance here that is rarely talked about. When I see people trying this for the first time, they often think you need to write a full test up front, then get frustrated because that is hard, or because they are not quite sure what they want to do yet. And they quite often fall back to just writing the code first, as they feel trying to write a test before they have a solid understanding of what they want is a waste.

    But - you don’t need to start with a complete test. Start as simple as you can. I quite often start with the hello world of tests - create a new test and assert false. Then see if the test fails as expected. That tells me I have my environment set up correctly and is a great place to start. Then, if I am unsure exactly what I want to write, I start inside the test and call a function with a name that I think I want. No need for parameters or return types yet, just give the function a name. That will cause the code to fail to compile, so write a stub method/class to get things working again. Then start thinking about how the user will want to call it and refactor the test to add parameters or expected return types, flipping back to the implementation to get the code compiling again.
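
    A minimal Go sketch of those first steps (ParseConfig is a placeholder name I made up):

    ```go
    package config

    import "testing"

    // The "hello world" of tests: assert false and watch it fail.
    // A red bar here proves the test environment is wired up.
    func TestEnvironmentWorks(t *testing.T) {
        t.Fatal("deliberate failure - checking the test setup")
    }

    // Next step: call the function you wish existed. No parameters
    // or return types yet - just the name you think you want.
    func TestParseConfig(t *testing.T) {
        ParseConfig() // won't compile until the stub below exists
    }

    // Minimal stub to get things compiling again (it would normally
    // live in the non-test file). Parameters and a return type get
    // added later, as the test grows to need them.
    func ParseConfig() {}
    ```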

    You can use this to explore how you want the API to look, as you are writing the client side and library side at the same time. You can just use the test as a way to see how the caller will call the code. No need to start asserting behavior yet at all. I will even sometimes just debug-print values in the test or implementation, or write code in the test that calls into a third party library I am new to, just to see how it works - with no intention that the test will even be included in the final PR. I just use tests as a staging ground to try out ideas. Don't feel like every test you write needs to be kept.

    Sometimes I skip the testing framework altogether and just test the main binary in a simple situation. I especially do this for simpler binaries that are meant to mostly do one thing and don't really need a full testing framework. But I still do the red/green/refactor loop of TDD. I am just very loose on what I consider a valid “test”.

    The second time you’re introducing duplication (i.e., three copies), don’t. You should have enough data points to create a good enough abstraction.

    This misses one big caveat: the size of the code being duplicated. If it is only a few lines then don't worry so much. 5 or even 10 copies of a 2-line snippet are not a big issue, and any attempt at de-duping them is quite often far harder to read and maintain than the copies. As the amount of code you need to copy/paste grows, though, it becomes more advantageous to abstract it into fewer copies.
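
    A hedged Go illustration of the trade-off (all names invented):

    ```go
    package main

    import (
        "errors"
        "log"
    )

    // Three short duplicates: mildly repetitive, but each line is
    // obvious on its own and free to diverge later.
    func report(err error) {
        log.Printf("fetch users failed: %v", err)
        log.Printf("fetch orders failed: %v", err)
        log.Printf("fetch items failed: %v", err)
    }

    // De-duping a 2-line pattern like that often costs more than it
    // saves: every reader now has to jump to the helper to see what
    // actually happens at the call site.
    func logFailure(what string, err error) {
        log.Printf("fetch %s failed: %v", what, err)
    }

    func main() {
        report(errors.New("timeout"))
        logFailure("users", errors.New("timeout"))
    }
    ```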

    if you find yourself finding it difficult to mock

    I hate mocks. IMO they should be the last resort for testing things. They bake far too many assumptions about the code being mocked out, and they do it in every test you write. IMO just test as much real behavior as you can. As long as your tests are fast and repeatable, you don't need to be mocking things out - especially internal behaviors. And when you do need to talk to an external service of some kind, I would start with a fake implementation of the service before a mock. A fake implementation is just a simple, likely in-memory, implementation of the given API/interface/endpoints or whatever.

    With a mock you bake assumptions about the behavior into every mock you write - which is generally every test you write. If your assumptions are off, then you need to find and refactor every test that has that assumption. With a fake implementation you just update the fake and you should not need to touch your tests. And you can write a fake once and use it in all your tests (or better yet use a third party one if one is available - for instance I quite often use gofakes3, a golang in-memory implementation of the AWS S3 API).
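
    A rough Go sketch of the idea (UserStore and every name here are invented for illustration) - the fake is a real, if tiny, implementation, so tests exercise genuine behavior instead of pre-scripted answers:

    ```go
    package store

    import "errors"

    var ErrNotFound = errors.New("user not found")

    // UserStore is the interface the production code depends on;
    // the real implementation would talk to an actual database.
    type UserStore interface {
        Save(id, name string) error
        Get(id string) (string, error)
    }

    // FakeUserStore is a complete in-memory implementation. Write
    // it once and share it across every test. If an assumption
    // about the store's behavior turns out wrong, fix it here -
    // not in dozens of per-test mocks.
    type FakeUserStore struct {
        users map[string]string
    }

    func NewFakeUserStore() *FakeUserStore {
        return &FakeUserStore{users: make(map[string]string)}
    }

    func (f *FakeUserStore) Save(id, name string) error {
        f.users[id] = name
        return nil
    }

    func (f *FakeUserStore) Get(id string) (string, error) {
        name, ok := f.users[id]
        if !ok {
            return "", ErrNotFound
        }
        return name, nil
    }
    ```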


  • Almost, but with one key difference. PPAs are precompiled binaries where you cannot inspect the source - you have to trust the maintainer of the PPA. The AUR is a repository of source packages which you can download and inspect yourself (or hope others have done so). This makes the AUR more community focused than PPAs, I feel. The AUR is also a central repo, managed by people that don't own the vast majority of the packages hosted on it, where packages can be taken down if found malicious. PPAs are lots of separate repositories, all managed by different people who generally maintain all the packages for their own PPA.

    Though in both cases anyone can upload anything to them, so they are not 100% trustworthy. But I do think the way the AUR works puts it ahead of PPAs.


  • Or just thing.age(), which is fine and fairly obviously returns the age of the thing. That is what is done in most languages that don't have computed properties. get_ on a method really adds no value or clarity. The only reason foo() is ambiguous is that it is a bad name - really just a placeholder. Leaving out the brackets adds no value either; it just makes it harder to tell that you are calling a function instead of accessing a property.


  • Make its usage cleaner? I don't see how a property does that at all. We are really talking about x.foo vs x.foo(). IMO the latter tells you this is a function that needs to do some work, even if that work is very cheap. x.foo implies that you might be able to set the value as well - but with computed properties, maybe not. That IMO makes the program a bit harder to read and understand, as you cannot simply assume it is a plain assignment or field access. It could be a full function call that does different things depending on other values, or even depending on whether you are setting vs getting the value. I prefer things being more explicit.


  • It is such a weak smell, though, that you might as well look at any bit of code you have and ask yourself if it is bad code. Lambdas are fine in a lot of places, and their existence is not an indication of good or bad code. They are just a tool that can be used in lots of situations.

    A better smell here is excessive inlining causing a loss of context. It does not matter if it is a lambda, a parameter to a function call, or a field in an object creation. None of those are signs of bad code, but if you cannot understand what some anonymous thing is doing, you might want to give it a name. This does not have to mean creating a named function - it might just be assigning the lambda or parameter to a variable so it is named, as in the sketch below.
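
    A small Go sketch of that middle option (the types and names are made up):

    ```go
    package main

    import (
        "fmt"
        "sort"
    )

    type order struct {
        paid    bool
        dueDays int
    }

    func main() {
        orders := []order{{true, 30}, {false, 7}, {true, 1}}

        // Inlined lambda: fine while it stays short, but the
        // intent is only visible by reading the body.
        sort.Slice(orders, func(i, j int) bool {
            return orders[i].dueDays < orders[j].dueDays
        })

        // The same lambda assigned to a variable: no separate
        // named function needed, but the name restores the
        // lost context.
        byUrgency := func(i, j int) bool {
            return orders[i].dueDays < orders[j].dueDays
        }
        sort.Slice(orders, byUrgency)

        fmt.Println(orders)
    }
    ```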

    But on the flip side, I find that if you are struggling to name something, that can also be a smell that maybe it should just be inlined. Giving everything a name can produce code just as bad as trying to inline everything. There is a balance in the middle somewhere, and the presence of a lambda does not really hint which way you want to go. So its existence is a very poor marker for code quality.

    it’s probably worth taking a second to ask yourself

    You can extend this to all code, not just lambdas. For any code, you can take a second to ask yourself if you could write it better or more readably. If that is the bar, then all code has a very weak code smell, and singling out lambdas here seems arbitrary.


  • I believe that either gender has a genetic disposition towards the feminine form.

    I am not sure you can conclude that. Environmental factors likely play a large role here, as well as genetic ones. We tend to idolize and sexualize the female form far more than the male form these days. But if you look back at different cultures that did the same with the male form, I suspect you would see the opposite trend, with both genders preferring the male form more often.



  • lambdas are actually a (very mild) code smell

    Lambdas alone are not a code smell. That is like saying objects or functions or even naming things are a code smell just because you can use them in bad ways. It is just too broad a statement to be useful. At best you might say that large/complex anonymous lambdas are a code smell - just like large/complex and badly named functions are. You need to be specific about code smells, otherwise you are basically saying code is a code smell.


  • Functions do something, while properties are something.

    This is my argument against them. Computed properties do something: they compute a value. That may or may not be cheap, and it adds surprising behavior to the property. IMO properties should just be cheap accessors to values. If a value needs to be computed, then seeing a function call hints that the caller may want to cache it in a variable if they use it multiple times. With properties you need to look up the definition to know it is actually doing work instead of just handing you a value. That is surprising behavior, which I dislike in programs.


  • Functions can do all this. Computed properties are just syntactic sugar for methods; that is it. IMO they make things more confusing for the caller. Accessing a property should be cheap - but with computed properties you don't know it will be. Especially with caching, as in your example: the first access can be surprisingly slow with the next being much faster. I prefer things not to do surprising things when I use them.
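
    As a hedged Go sketch (all names invented): a plain method can do the compute-then-cache dance explicitly, and the () at the call site at least signals that work may happen:

    ```go
    package main

    import (
        "fmt"
        "strings"
        "sync"
    )

    type report struct {
        lines []string

        once    sync.Once
        summary string
    }

    // Summary computes its value on first use and caches it. The
    // call syntax (r.Summary(), not r.summary) is the hint to
    // callers that the first access may do real work.
    func (r *report) Summary() string {
        r.once.Do(func() {
            // Stand-in for an expensive computation.
            r.summary = strings.Join(r.lines, "; ")
        })
        return r.summary
    }

    func main() {
        r := &report{lines: []string{"a", "b", "c"}}
        fmt.Println(r.Summary()) // slow: computed here
        fmt.Println(r.Summary()) // fast: cached
    }
    ```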



  • 1. Changing the coding style substantially

    Well, refactoring is the only place you can change the coding style… But really, a change to the coding style should not be done blindly; get buy-in from the team before you start changing it. It is not uncommon for a team to make bad decisions throughout the course of a project, including the style they picked. And if you ask around you might find others have the same ideas as you and don't like the current style either. Then you can have a conversation about it. But you should not jump in and start changing the style of a code base before you talk to your team.

    One of the biggest problems I’ve seen is refactoring code while learning it, in order to learn it. This is a terrible idea. I’ve seen comments that you should work with a given piece of code for 6-9 months. Otherwise, you are likely to create bugs, hurt performance, etc.

    I disagree with this. No refactoring for 6 to 9 months? That is just insane. IMO refactoring is a great way to learn how the code works. But it does need review from someone more experienced with the code, who can see if you are missing anything important. They can then point out why those bits are important, and you just learned a bit more about the code base.

    If no one else knows the code, then what is waiting 6 to 9 months going to achieve? The lesson here is not to refactor blindly, but to seek out others who understand the code before you refactor - and failing that, to spend more time trying to understand it yourself first.

    1. Overly consolidating code

    Another way to put this is: don't overly DRY your code. And here I would stick with the original code rather than either refactor. The refactor saves no lines and just makes each line far longer - so much so that it gives me a horizontal scrollbar…


    But overall there are some good ideas in the post. Though I do dislike how they use the term refactoring - all of these points apply to any code you write, not just to refactoring existing code.



  • These are not quite equivalent. In terms of short-circuiting, yeah, they both short-circuit when they find the value. But the latter returns from the current function and the former does not. If you add a return to that first example then they are equivalent - but then it cannot be used inline. Which is a nice advantage of the former: it can be used inline with less faff, as you can just assign the result to a value. The latter needs you to declare a variable, assign it, and break from the loop inside the if.
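
    I don't have the original snippets in front of me, but a rough Go analogue of the two shapes (with slices.IndexFunc standing in for the stream):

    ```go
    package main

    import (
        "fmt"
        "slices"
    )

    func main() {
        nums := []int{3, 8, 15, 4}
        overTen := func(n int) bool { return n > 10 }

        // "Former" shape: the search is an expression, so it can
        // be used inline - just assign the result where needed.
        i := slices.IndexFunc(nums, overTen)
        fmt.Println(nums[i])

        // "Latter" shape: declare a variable up front, assign
        // inside the loop, break out - more ceremony for the
        // same search.
        var first int
        for _, n := range nums {
            if overTen(n) {
                first = n
                break
            }
        }
        fmt.Println(first)
    }
    ```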

    Personally I quite like how the former requires less modification to work in different contexts, and I find it nicer to read. Though not all logic is easier to read with a stream; sometimes a good old for loop makes the code more readable. Use whichever helps most at each point - never blindly apply some pattern to every situation.


  • I believe that VKD3D can give you DirectX support. Proton - which is essentially a bundle of Wine + VKD3D and other things - should be able to run most games these days. It is what Valve created to run games through Steam on Linux/the Steam Deck. https://www.protondb.com/ shows what is able to run on it, and that is most things that do not have some form of incompatible anticheat.

    You might have more luck not using Wine directly (if that is what you are doing) and instead using things like Steam (you can add external games to it to run them in a Proton context), Lutris, or the Heroic Games Launcher.