
  • For the non-roboticists: SLAM = Simultaneous Localization And Mapping.

    In robot navigation we often face the problem of getting a grasp of the environment and of the robot’s position in it. It’s easier if a map is already provided along with some sort of external observer who knows where the robot is relative to that map.

    Since people don’t usually come into your home to map it out and install sensors to locate the robot, SLAM is the way to go. While the robot moves through an environment, a map of that environment is created, and by utilizing some fancy techniques on sensor data (from cameras, mic + loudspeaker, LIDAR or whatever) it is possible to also infer the robot’s position within it.
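
    A heavily simplified sketch of the idea (plain NumPy, made-up sensor readings): dead-reckon the pose from odometry and mark range readings into an occupancy grid. A real SLAM system would additionally track uncertainty and correct the pose against the map (EKF-SLAM, particle filters, graph SLAM and the like).

    ```python
    import numpy as np

    grid = np.zeros((100, 100), dtype=np.uint8)   # 0 = unknown/free, 1 = occupied
    pose = np.array([50.0, 50.0, 0.0])            # x, y (cells), heading (rad)

    def move(pose, dist, dturn):
        """Update the pose estimate from odometry (dead reckoning)."""
        x, y, th = pose
        th += dturn
        return np.array([x + dist * np.cos(th), y + dist * np.sin(th), th])

    def sense(grid, pose, ranges, angles):
        """Mark detected obstacles in the map, relative to the current pose."""
        x, y, th = pose
        for r, a in zip(ranges, angles):
            ox = int(round(x + r * np.cos(th + a)))
            oy = int(round(y + r * np.sin(th + a)))
            if 0 <= ox < grid.shape[0] and 0 <= oy < grid.shape[1]:
                grid[ox, oy] = 1
        return grid

    # One step: drive forward two cells, turn slightly, take three fake range readings.
    pose = move(pose, dist=2.0, dturn=0.1)
    grid = sense(grid, pose, ranges=[5.0, 6.0, 4.5], angles=[-0.3, 0.0, 0.3])
    print(pose, grid.sum())
    ```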



  • If it was up to me our industries would’ve never left the country in the first place, and most of the privatizations wouldn’t have gone through.

    Alright, I see it the same way. Still, Germany managed to manoeuvre itself into dependencies on Russia and China, which has shot it in the foot and still does.
    I’m totally in favour of trying to find diplomatic ways. But if the invitation to the negotiating table is not taken up and is instead met with aggression on multiple levels, it is usually the wrong move to give in to the demands of those who are not afraid to use violence. Therefore, what you see as “funding aggression” is to me a display of resistance. It shows that we will not be bullied into submission, nor will we allow those who use violence to dictate the terms of peace or cooperation.

    nazi like rhetoric going around nowadays. Green politicians talking about “the poison of Islam” [1], which reminds me a lot of the antisemitic rhetoric from WW2 (see “Der Giftpilz” [2]

    I’ve read the article and afterwards watched Katharina Dröge’s speech to get a grasp of the context. As I suspected, the article from the far-right magazine “Junge Freiheit” over-emphasized the “Gift des Islams” part of her speech. It’s just typical click- & ragebait again, with a very misleading headline. At least the article itself somehow manages not to completely misrepresent her actual speech.
    If you’re interested, you can currently watch it here: https://www.ardmediathek.de/video/phoenix-parlament/katharina-droege-in-der-generaldebatte/phoenix/Y3JpZDovL3Bob2VuaXguZGUvNDU4NjA2Mg
    You’ll probably notice as well that this was a rather minor phrasing. More importantly, it was embedded in a passage about, and directed at, Islamism, i.e., people radicalized to the point of becoming murderous, which of course has to be prevented. In the same speech she emphasizes the importance of asylum for all those who have suffered the worst and do not become radicalized criminals.
    Given this context and the fact that the German Greens are usually considered a rather left-leaning political party, I find the comparison to ‘Der Giftpilz’ not only vastly misplaced but also ridiculous.

    Besides that, Germany didn’t denazify properly after WW2 anyway (“Persilscheine”

    Thanks for pointing out the “Persilscheine”. Despite the tremendous amount of “Nazis are evil” content in school, especially in history classes, this was never a topic. An educational gap I’m eager to fill soon.
    Regarding the claim of improper denazification, I can’t add anything beyond personal impressions, which have no value for general statements.

    The chancellor talking about “deportations in big style”. The CDU trying to ban refugees from going to any public events. And these are not even the nazis in the AfD. It might be 2024, but mentalities haven’t changed much, we’re just picking other out groups to stomp on, mostly because we’re not tackling the real issues at heart.

    Yes, yes. This is indeed really bad. From my point of view, the big old parties, SPD and CDU/CSU, fear for their public support. And instead of trying to address the real issues, they mimic the talking points of the AfD, which, unfortunately, is becoming increasingly popular in many parts of Germany.
    I wonder why that is.
    No, I don’t.
    (Okay, people being too incompetent to think critically about media adds to that.)

    However, I wouldn’t go so far as to say that mentalities haven’t changed much in all the time since WW2. Three generations have been raised since then, with a fourth reaching maturity. And there is still a tremendous number of people who do not share the xenophobic idiocy propagated by the AfD, CDU & Co. It’s not too late to prevent a repetition of the mistakes of the past.

    But again, to get back on China, Germany is very well conducting major business with a ton of authoritarian countries, stomping on workers’ rights all across the world just to enrich German companies, and thus I won’t take their virtue signalling for anything more than just virtue signalling.

    I’m also not really happy about that. It’s one devil replaced by another. However, there are different shades to that: at least some of those devils have not launched a full-scale war. And Germans have now started to question their dependencies on foreign countries a bit more. But of course it can’t be a long-term solution to keep things as they are now.

    I’ll take virtue signalling. “It’s something”. Besides, the current government is the most productive since the Merkel era and has initiated and achieved many good things, although I agree that it could do better regarding foreign affairs. Most progress has been made in domestic affairs.

    I’m here just pointing out the hypocrisy. If they care so much about Taiwan, they should at least make it clear that it is due to geostrategic interests, not because they suddenly found their love for democracy and what not other nonsense.

    And people love hypocrites. If someone says one thing but does the opposite, does that make what they said wrong?
    How about we criticise the bad and praise the good?


  • God forbid people have some self expression

    They do indeed forbid it.

    10 "If you go to battle against your enemies, and the LORD your God delivers them into your control, you may take some prisoners captive. 11 If you see among the prisoners a beautiful woman and you desire her, then you may take her as your wife. 12 Bring her to your house, but shave her head and trim her nails

    Deuteronomy 21

    Oh man, religions are batshit crazy.









  • My point is that the following statement is not entirely correct:

    When AI systems ingest copyrighted works, they’re extracting general patterns and concepts […] not copying specific text or images.

    One obvious flaw in that sentence is the blanket statement about AI systems. There are huge differences between different realms of AI, and failing to address them, at least with a brief mention, undermines the author’s factual correctness. For example, there is a plethora of non-generative AIs, i.e., ones that do not generate text, audio or images/videos but merely operate as, for instance, classifiers or clustering algorithms, and which are (without further modification) not intended to replicate data similar to their inputs, but rather to provide insights (a small sketch follows below).
    However, I can overlook this, as the author might simply not have thought about that in the moment of writing.
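
    As a small, purely hypothetical illustration (made-up 2-D points, nearest-centroid rule), a non-generative classifier like the following only ever outputs a class label and never reproduces its training samples:

    ```python
    import numpy as np

    # Toy nearest-centroid classifier on made-up 2-D points.
    X_train = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.2]])
    y_train = np.array([0, 0, 1, 1])

    # "Training": one centroid per class.
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

    def predict(x):
        """Return the label of the nearest class centroid, nothing more."""
        return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

    print(predict(np.array([0.1, 0.1])))  # -> 0
    print(predict(np.array([5.0, 5.0])))  # -> 1
    ```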

    Next:
    While it is true that transformer models like ChatGPT learn patterns, i.e., the most likely next token in a sequence of contextually coherent data, given the right context it is not unlikely that they reproduce their training data nearly or even completely identically, as I’ve demonstrated before. The less data is available to generalise from for a specific context, the more likely it becomes that the model simply replicates its training data. This is in principle fine, because that is what such models are designed to do: draw the best possible conclusions from the available data to predict the next output in a sequence. (That’s one of the reasons why they need such an insane amount of training data.)
    This can ultimately lead to occurrences of indeed “copying specific texts or images”; a toy demonstration follows below.
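
    As that toy demonstration, consider a greedy word-level bigram model on a tiny made-up corpus (nothing like a real transformer, but the tendency is analogous): when a context occurs only once in the training data, “most likely next token” prediction can do nothing but reproduce the training text verbatim.

    ```python
    from collections import Counter, defaultdict

    corpus = "the five boxing wizards jump quickly".split()

    # Count the observed continuations for each one-word context.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(token, steps):
        out = [token]
        for _ in range(steps):
            if token not in counts:                      # no known continuation
                break
            token = counts[token].most_common(1)[0][0]   # greedy "most likely" token
            out.append(token)
        return " ".join(out)

    print(generate("the", 10))  # -> "the five boxing wizards jump quickly"
    ```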

    but the fact that you prompted the system to do it seems to kind of dilute this point a bit

    It doesn’t matter whether I directly prompted it for that. I set the right context to achieve this kind of behaviour, because context matters most for transformer models; directly prompting it to do that was just an easy way of setting the required context. I’ve occasionally observed ChatGPT replicating identical sentences from (copyright-protected) scientific literature when I used it to get an overview of a specific topic while also having books or papers about it on hand. The latter demonstrates again that transformers become more likely to replicate training data the more “specific” a context becomes, i.e., the less training data is available for that context compared to others.



  • The position with the vegan cats is basically indefensible.

    What do all organisms, including animals, need to properly maintain their metabolism?
    Nutrients.
    What are nutrients?
    A bunch of different chemicals.

    Depending on the specific organism, a different set of nutrients is required, also varying in amount, of course.

    All nutrients required by humans, at least, can be obtained or synthesized from non-animal sources.

    From that simplified perspective, it’s absolutely rational to explore how we could feed animals like cats on a purely vegan diet.
    But it’s certainly nothing that should be left to the layman alone; veterinary care is advisable if harm to the animal is to be avoided.






  • If we’re speaking of transformer models like ChatGPT, BERT or whatever: They don’t have memory at all.

    The closest thing resembling memory is the accepted length of the input sequence combined with the attention mechanism. (If left unmodified, this leads to a quadratic increase in computation time as that sequence grows.) And since the attention weights are a learned property, it is in practice likely that earlier tokens of the input sequence get basically ignored the further they lie “in the past”, as they usually do not contribute much to the current context.
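
    A minimal sketch of scaled dot-product self-attention (plain NumPy, random stand-in embeddings) that makes the n × n score matrix, and hence the quadratic growth with sequence length, explicit:

    ```python
    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention; the score matrix is n x n."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                       # shape (n, n)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
        return weights @ V                                    # shape (n, d_v)

    n, d = 6, 4                                # 6 tokens, 4-dimensional embeddings
    x = np.random.default_rng(0).normal(size=(n, d))
    out = attention(x, x, x)                   # self-attention: Q = K = V = x
    print(out.shape)                           # (6, 4)
    ```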

    “In the past”: Transformers technically “see” the whole input sequence at once, but they are equipped with positional encoding, which incorporates spatial and/or temporal ordering into the input sequence (e.g., the position of words in a sentence). That way they can model sequential relationships such as those found in natural language (sentences), videos, movement trajectories and other kinds of contextually coherent sequences.
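
    For reference, a small sketch of the sinusoidal positional encoding used in the original Transformer paper (dimensions picked arbitrarily here); the resulting matrix is simply added element-wise to the token embeddings:

    ```python
    import numpy as np

    def positional_encoding(seq_len, d_model):
        """Sinusoidal positional encoding: a unique sin/cos pattern per position."""
        pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
        i = np.arange(d_model // 2)[None, :]               # (1, d_model / 2)
        angles = pos / np.power(10000.0, 2 * i / d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles)                       # even dimensions
        pe[:, 1::2] = np.cos(angles)                       # odd dimensions
        return pe

    pe = positional_encoding(seq_len=8, d_model=16)
    print(pe.shape)   # (8, 16)
    ```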