No.11296
>It is statistics with a pseudo random number generator on top to induce variations
1. Yes, it is.
2. How does it exclude intelligence?
3. In which way our brains are fundamentally better?
No.11297
>>11296
>>11295
And we have the first philosophical problem. What or how is intelligence?
I say philosophical because it is a concept. Humanities Ernst.
Catherine Malabou has an interesting book on the concept of intelligence and she develops her own concept of it, which I think is quite nice. I will try to sum it up in a later post since I had to do a presentation on it in the past.
Generally it is a good idea to have a thread on it. There can be said a lot about it.
No.11304
>>11296
>2. How does it exclude intelligence?
>>11297
I've recently read an analysis that tried to compare current A.I. with human intelligence. It chopped intelligence into different subcategories, which I can't quite remember now. iirc, it basically was something like "A.I. is good at finding patterns and similarities within a lot of data points, even slightly better than the average human, but A.I. can't even do some other simple tasks every child can do."
>3. In which way our brains are fundamentally better?
I seriously do not feel qualified to answer this question, since I don't even have a beginner biology student's level of understanding of carbon-based lifeforms, much less specialized knowledge about how the brain functions. Because of that I can only work with things I see emerging from these biological structures.
I would argue the parts we would describe as "intelligence" within A.I. are superimposed by the creator of the reward function(s) evaluating and guiding the pseudo random number variations. It is not inherent to the A.I.. You just get a watered down version of the intelligence of the developer, who uses tons of data points and iterations to get a very niche and limited statistical average, which can afterwards be applied to only similar data points the model has been trained with.
Another point I would somewhat ascribe to intelligence would be creativity; the act of creating something, that hasn't been before - not mere combining of things which already have been. I do not see our current tools reaching this point any time soon.
Energy efficiency is another point I would describe our brains as "fundamentally being better" at. Though, that might not be too fundamental and just a really huge difference.
Another big problem I see: consciousness.
>cogito ergo sum.
We have no idea where or how "I" is; we just know "I" has to be. Can intelligence be without a consciousness? E.g. are honeycombs intelligent? Honey bees do not build hexagonal tubes; they build round ones. Yet, honeycombs form as an emergent pattern following physical rules, increasing the volume they can hold while minimizing the material needed and staying structurally robust. This clearly is intelligent behaviour, yes?
Can something display intelligent behaviour without having intelligence of its own?
No.11309 KONTRA
>>11297
I will now expand, as we are already talking about technicalities.
So, from what I gathered of the third chapter of her book (the title of that chapter is "Like a Pollock Painting"):
First of all, she draws heavily on John Dewey, an American pragmatist who lived from around 1900, I think, up until the 1930s or 1940s, maybe even longer, dunno.
Her concept of intelligence is a sort of development of Dewey's concept of intelligence and is cordoned off from two other concepts of intelligence: one is a concept of intelligence as simply genetic and measurable with the IQ test, and the other is an epigenetic concept that makes intelligence dependent on environmental factors and conceptualizes it as contingent (in contrast to the solely genetic one).
Intelligence, for her and coming from Dewey, is something sort of like a habit. A method for solving problems, and a method that can solve its own problems by reorganizing and interrupting its own pattern/habit:
>Without contradicting itself, the automatism of intelligence thus appears as the mechanism able to interrupt its own routine (the rigid repetition of its habits) without becoming anything other than an automatism (an autonomous process). (108)
Intelligence is what can rework the method of intelligence itself. The book, among other things, is trying to argue that this definition of intelligence makes it possible to dissolve the boundary between people and machines. A machine that can interrupt its own intelligent behavior to solve a problem in a different way might then be called intelligent.
There is more to say about the concept than I am doing here, but it is late and I am tired, and maybe I misrepresented a few things along the way.
No.11321
>>11304
I've come up with a better explanation of my point. When you say that "AI is just statistics", it's like saying that "the brain is just a chemical reaction" or "a poem is just a sequence of letters" or "your favorite hentai is just a sequence of 0s and 1s on disk". Formally true, but complex and meaningful things can be a combination of many trivial elements.
>I would argue the parts we would describe as "intelligence" within A.I. are superimposed by the creator of the reward function(s) evaluating and guiding the pseudo random number variations. It is not inherent to the A.I..
In the case of humans it would mean that actual intelligence is "superimposed" by evolution rewarding us for survival and reproduction, and not inherent to us.
>You just get a watered down version of the intelligence of the developer, who uses tons of data points and iterations
No. If you can come up with a formula like "let the reward function be the logarithm of the probability of predicting the next token", it doesn't mean that you have the "concentrated knowledge" of the model which you've trained. You may argue that the data has this knowledge, but then again it'll bring you to the question of whether our knowledge is inherent to us or just derived from experience of the universe around us.
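To make that concrete, here is a minimal sketch of the next-token objective (assuming PyTorch; the tiny model and vocabulary sizes are made up for illustration). The whole "reward" is just the log-probability the model assigns to the token that actually came next in the data, nothing in the formula contains the developer's own knowledge:

# Minimal sketch of the next-token objective, assuming PyTorch. Toy sizes only.
import torch
import torch.nn.functional as F

vocab_size = 100
batch, seq_len = 4, 16

# Pretend these are the model's raw scores for each position's next token.
logits = torch.randn(batch, seq_len, vocab_size)
# And these are the tokens that actually followed in the training text.
targets = torch.randint(0, vocab_size, (batch, seq_len))

# "Reward" = log-probability of the true next token; training minimizes its negative.
log_probs = F.log_softmax(logits, dim=-1)
nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1).mean()

# Equivalent shortcut used in practice:
nll_ce = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
print(nll.item(), nll_ce.item())  # same value up to floating point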
>to get a very niche and limited statistical average, which can afterwards be applied to only similar data points the model has been trained with
The model generalizes patterns it sees in the training data and can work with something it hasn't seen before by extrapolating older experience. Of course it can't work with something entirely new, but then again, can you? I think not. You'll have to try, to fuck around and find out, to gather some experience about the new phenomena and then act according to that experience.
>Another point I would somewhat ascribe to intelligence would be creativity; the act of creating something, that hasn't been before - not mere combining of things which already have been. I do not see our current tools reaching this point any time soon.
It raises the question of whether humans are able to create something new out of thin air. Or do they just incrementally create new things by combining old ones until you can't recognize the initial inspirations in the final result?
>Another big problem I see: consciousness.
Why do you think that AI doesn't have it? Consciousness is something unverifiable in the physical universe. I can only be sure about my own consciousness. And since I don't think that I'm special, I assume that other humans have it too.
What modern AI lacks, in my opinion, compared to humans is logical thinking. Which is funny, because classical AI such as theorem provers has precisely this type of intellect. Maybe the future is combining these two approaches.
Also it has bad short-term memory, but that's probably solvable with better architectures.
No.11337
>>11296
because human minds don't work the way language models do
this is not an opinion, it is how it be
ergo, current AIs are not like minds.
this is just a terminal case of "the human mind is just like a [insert the most advanced technology humans have invented so far]"
>>11321
>When you say that "AI is just statistics" it's like saying that "the brain is just a chemical reaction" or "a poem is just a sequence of letters" or "your favorite hentai is just a sequence of 0s and 1s on disk"
the difference is that AI really is just statistics, and brains' chemical reactions can do many more things than statistics (in fact, brains are pretty bad at statistics), and poems aren't written by trying to predict a piece of text that looks like a poem according to some function, and AI porn sucks.
No.11338
>>11337
Poems are written exactly like you say. The poet has some metric embedded in his brain that measures the poem-ness of a list of tokens; he then tries to create a list of tokens that maximizes that function.
You are a hopeless romantic and you are wrong. I think I will let AI do my shit posting from now on, no one will be the wiser and it will insult all of you much better than I ever could.
No.11356
>>11337
>because human minds don't work the way language models do
I agree. Language models are much less anthropomorphic than they appear to be. But I haven't claimed otherwise; I asked "how is a chemical reaction better than statistics", not "how is it different".
>brains' chemical reactions can do many more things than statistics
This is not obvious to me. For example, who would have thought that drawing pictures by description is "just statistics"?
No.11390
People who think LLMs are "just statistics" should take a few minutes to look at the actual math involved. It's not "just statistics".
But that doesn't mean LLMs are "like human brains", they are not. Too tired right now to go into a long explanation; it has to do with the structure of the networks being different, at the very least. They are more different than just that, but one significant difference is already enough to say that they aren't the same either way.
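For whoever wants a taste of that math, here is a toy numpy sketch of one standard building block, a single scaled dot-product attention head (toy sizes, random weights, purely illustrative). Whether you want to call matrix products plus a softmax "statistics" is the semantic argument from above:

# One scaled dot-product attention head in plain numpy. Toy sizes, random weights.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 8

x = rng.normal(size=(seq_len, d_model))          # token representations
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_head)               # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
out = weights @ V                                # mixture of value vectors per token

print(out.shape)  # (5, 8): one updated vector per token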
No.11392
>>11390
Who cares about what goes on inside? Results speak for themselves. I agree: LLMs are not like human brains, they are far superior.
No.11401
>>11390
>People who think LLMs are "just statistics" should take a few minutes to look at the actual math involved. It's not "just statistics".
But I took weeks to look at the actual math involved. =D
OK, it's subjective and depends on how broadly you define "just statistics".
Pretty interesting reading btw. In math you look at an extremely complex theorem and it was proved by some faggot in a wig 400 years ago who didn't even have modern algebraic notation. In ML, a very simple and beautiful heuristic might have been invented just 2 years ago. "Wow, that could be me..."
Also almost nothing is proven strictly, everything is intuitive. Working as an ML engineer must be a mess: there is no sure way to know what will work and what won't, you make an educated guess, wait for the model to train (from a few minutes to weeks depending on the model) and look at whether the metrics go up.
No.11404
>>11392
>LLMs are not like human brains, they are far superior.
Not until they have been linked with the blockchain, they aren't.
No.11419
>>11356
>drawing pictures by description is "just statistics"
The "by description" part is a deception and a brilliant marketing tactic.
If you've tried to generate a very specific picture with AI image generators, you will reach a point where you're not "describing" anything per se, but trying to trick it into generating the thing you want.
It's the same mode of interaction as trying to trick Google search into giving you a page that isn't an SEO-optimized, AI-generated blog, or trying to get the Gelbooru tag system to give you one specific image you remember masturbating to 5 years ago: a decidedly non-linguistic, digital form of interaction.
That's because the prompts are not "descriptions", they're a text based interface for turning the knobs on various weights in the input.
One example of a thing you can explain by an actual human description, but can't do with prompts, is specifying the spatial relationship, position and rotation of the objects in the image. Because nobody bothered to tag the original dataset of images with such things.
Again, what AIs do cannot be called "drawing", in that it's only a subset of all the things humans can do when they draw. It's very good, and much better than humans, at that subset of things, but it's not really "drawing by description".
No.11430
>>11419
>That's because the prompts are not "descriptions", they're a text based interface for turning the knobs on various weights in the input.
Same with human artists.
You ask them to draw a portrait of Jennifer Aniston, and they will generate something that optimizes for the activation of their Jennifer-Aniston-Neuron.
Pathetic dumb people desperately cling to the notion that human brains are special in some undefined way, maybe they even think there is some 'divine spark' in humans. LOL, NO. You are pieces of shit, and inherently worthless, like all humans.
Torturing every living human being to death is no different from switching off an instance of an LLM; the existence of humans is as value-neutral as the existence of the next best grain of dirt.
I'm genuinely glad that LLMs will force more people to accept this reality!
No.11433
>>11430
I find it interesting how worth comes into play here. Schizometrics and incelitis.
>and they will generate something that optimizes for the activation of their Jennifer-Aniston-Neuron.
It depends on the artist what they understand as optimal performance. Are LLMs capable of optimizing for various outcomes, or are they bound to their human creators to change that?
No.11434 KONTRA
>>11433
>u an incel u schizo is a premise from which i can deduce everything i like xddd
>don't even have to check if its factual
>and i can diagnose people online from posts bc i am very very smart
Could you at least try to prove that you are worth wasting my time on? The things I would like to say about you would get me banned instantly, but I think I will get away with insinuating that a gas chamber is too good for you.
No.11435 KONTRA
>>11434
You just proved it yourself. No need to waste more energy on you than necessary, bro.
No.11436 KONTRA
>>11434
I was going the inductive route btw.
No.11438 KONTRA
Only people who see their job being threatened by "A.I." are categorizing it as something "intelligent" or try to argue humans are just as stupid.
No, it is not. You are just a stupid subhuman and always have been. It is now just plainly obvious for everyone and you try to cope with it via attributing things you wish you had yourself to an "A.I.".
No.11439
>>11438
>Only people who see their job being threatened by "A.I." are categorizing it as something "intelligent"
lol, it's going to be so much fun when all the computer-"scientists", software-"architects", software-"engineers" and so on fall off their high horses and get put in the menial labor jobs where they belong, the fucking assburgers!
No.11440 KONTRA
>>11439
It is so clear that you have no understanding of what you are talking about.
I want to explain your stupidity with an analogy.
Imagine we were not talking about the tool "A.I." but about another tool: a kitchen knife.
Your understanding of the kitchen knife is non-existent. You would try to grab it by the blade and use the handle to bash someone's skull in, failing to realize the tool is designed for a whole different purpose. Everything is a club, because all you ever did is bash your head against a wall, and therefore everything you see is a tool for hitting your own or other skulls.
No.11441
>>11440
It's clear that you don't understand what AI is capable of. Are you a software-'architect' or something like it? Because that would explain your denial. In one thing you are right: it will replace people like you!
No.11443
>>11442
>as well
It will not replace me, because I have a real job.
No.11444 KONTRA
>>11443
oh, fugg... you are the Ernst without any reading comprehension. I should've noticed before.
Sorry, but going forward I am going to simply ignore you. Your responses are even less relevant than if I were prompting an LLM.
I know I am trigger happy and post too quickly, but interacting with you really isn't going anywhere. Might as well leave your stupidity up without correcting it.
>I have a real job.
I'm sure you do. It's prolly as much a "real" job as your idea of killing humans is the "real" solution to help humanity.
No.11450
>>11443
At some point AI will learn to empty trash cans and pick up dog poo.
No.11846
Inspired by
https://www.youtube.com/watch?v=eRUElTtr8Q8
https://www.youtube.com/watch?v=2q_mCPd5nIU
Suppose that in the future good brain-computer interfaces will be developed. And AI will be smart enough to do blue-collar jobs (already is?). BUT there won't be good and cheap mechanical arms/legs with proper motor skills.
THEN the most viable economic solution would be: don't teach blue-collar workers to do their jobs, just rent their bodies for 8/5 and attach them to a Neuralink. So instead of cyborgs (human brain and machine body) we'll have anti-cyborgs (human body and machine brain).
Is there sci-fi about such a thing?
No.11847 KONTRA
>>11846
What will the AI do better than the blue-collar worker? If you have to rent the bodies anyway, why spend money on an interface that brings no advantage over the human? Just pay the human, instead of paying for the interface and its maintenance, the AI, and the body for the AI.
To me, this sounds like "I, Robot" but they don't have the resources to build the robots proper.
Also, I think both would be cyborgs, cybernetic organisms. People-Machine hybrids either way.
t. did not click the links
No.11848
>>11847
What is easier and cheaper?
a) teach a man to be a plumber or a fitter
b) stick a 100 euro device into his heda
No.11849
>>11847
>What will the AI do better than the blue-collar worker?
Coordination and focus. Imagine a warehouse or factory floor where humans move loads with the efficiency of ants in a hive. No rest or socialization, only continual movement as tasks are completed and new tasks seamlessly assigned. They can even be programmed to carry their dead brethren away so that shareholder value isn't negatively impacted by emotion or unexpected obstacles.
>did not click the links
A shame.
No.11850
If you think about it, delivery guys are already anti-cyborgs. Orders are distributed and given to them by an algorithm, and they probably use the in-app navigator to get around. All the skill that is not outsourced to machines is motor skills, having legs.
But their job is braindead anyway, so it doesn't really count.
No.11978
AI is bottled human mediocrity, thanks to technology
No.11982 KONTRA
>>11978
And you (like everyone else) believe yourself to be not mediocre, and thus irreplaceable? AI can and will replace 90% of jobs; paupers are going to be fucked.
No.11985 KONTRA
>>11982
I'm a human; AI is human mediocrity.
No.16172
bump
today threda seems interesting again :3
>>11295
>Do have a post in mind
still not working on it.
No.16551
What's the best software for speech synthesis?
No.16575
>>16552
coqui-tts looks quite impressive, thanks for pointing that out!
No.16705
A.I. is at best mediocre.
It's by design.
No.16707 KONTRA
>>16705
>It's by design
You don't even know why boosting works, right?
No.16709 KONTRA
>>16707
I don't even know what boosting means in this context. So, enlighten me.
No.16722
>>16705
What in its design causes mediocrity?
No.16723
>>16722The training data.
You "train" the ML to identify the average of your training data. That is: the mediocrity of what you feed it. Therefor mediocrity is the best case scenario.
Assuming I understood it correctly, that is.
No.16724 KONTRA
>>16709
You wrongly assume that incorrect / incomplete training data leads to incorrect / incomplete results on those instances, as if overgeneralization were the general case (this assumption, ironically, being a case of overgeneralization). You wrongly assume that an ensemble of predictors is only as performant as the average of its components. Both assumptions are wrong. They are provably wrong, as wrong as assuming that the 1-norm equals the euclidean norm.
I will not spend more of my time discussing with vermin. Go read a book, or, preferably, go and jump from a high place. You deserve a slow and painful death.
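For the ensemble point, a quick toy check anyone can run (a sketch using scikit-learn on synthetic data; the sizes and numbers are picked arbitrarily): a boosted ensemble of depth-1 decision stumps typically scores well above a single stump, i.e. above the "average" of its components, which is roughly what boosting theory promises.

# Toy sketch: boosted ensemble vs. a single weak learner. Synthetic data, arbitrary sizes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# A single depth-1 stump: the weak learner on its own.
stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)

# AdaBoost's default weak learner is the same depth-1 stump; here 200 of them are boosted.
boosted = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("single stump:", stump.score(X_te, y_te))     # weak, not much above chance
print("boosted stumps:", boosted.score(X_te, y_te)) # typically clearly higher accuracy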
No.16727
>>16723> You "train" the ML to identify the average of your training dataCould you elaborate, what do you mean by that?
No.16729 KONTRA
>>16724
>[...] go and jump from a high place. You deserve a slow and painful death.
You can't even stay consistent from one sentence to the next.
Yet you act as if you were a big brain. How come?
No.16730
>>16724
>I will not spend more of my time
Ironically, this will not be necessary, as I feed your post into ChatGPT to replace you with something friendly and competent enough to explain your condescending terminology and converse further.
Chad and I conclude that your assumption about what you think I am assuming is wrong, and then we are able to discuss my actual concern with averageness in AI/ML output.
AI might replace ass-blasted bitches any time now is what my guts are telling me, kek.
No.16742
>>16727
Could you elaborate on what you don't understand?
I'm gonna try to explain with an example: image generation. I don't know if you ever tried to train your own image generation ML, but it basically works via pre-selecting the training data and adjusting the reward function(s).
If you start with a set of training data where every image was created only with brushes that do not have a hard edge (e.g. the "spray can" from M$ Paint), your resulting image generator is only able to create blurry images without any hard edges.
If you mix in a few images that do use a brush with a hard edge (e.g. the "pencil" ...), it now depends a little on your reward function. Either it coin flips and creates a fully blurry or fully hard-edged image, or, like most do, it averages these images out and you do not get a super blurry or super hard-edged image, but something in between. In a way, this allows you to adjust how blurry/hard-edged you want your generated images to be.
You cannot generate anything that is outside your training data; only things that are in the middle, aka mediocre.
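A toy illustration of that averaging effect, as a sketch in plain numpy (the "images" are made-up one-pixel brightness values, not a real generator): if the objective only rewards being close to the training examples on average, the single best output for a mixed dataset is literally their mean, something in between, never a new extreme.

# Toy sketch of the averaging effect under a plain least-squares objective.
# The "images" are just made-up 1-pixel brightness values.
import numpy as np

blurry = np.full(80, 0.2)   # 80 samples drawn with the soft "spray can"
hard   = np.full(20, 1.0)   # 20 samples drawn with the hard "pencil"
data = np.concatenate([blurry, hard])

# A model that can only emit one constant output x, "trained" to minimize
# the mean squared error to the training set:
candidates = np.linspace(0.0, 1.0, 1001)
losses = [np.mean((data - x) ** 2) for x in candidates]
best = candidates[int(np.argmin(losses))]

print(best)         # ~0.36: neither fully blurry nor fully hard-edged
print(data.mean())  # same value, because MSE is minimized by the mean

Real generators are conditional and far richer than a single constant, so this is only the crudest version of the point.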
No.16851
>>16742
OK, I got your point. It's too strong and too general: it only applies to some types of ML, and it sets a very high bar for not being mediocre. But otherwise you're right. An image generator won't generate a Malevich or Kandinsky picture if it's only trained on classic works (assuming it doesn't take a prompt where you could try to describe the desired style).
I think it's because the task of creating art is too anthropocentric. If we meet aliens, we won't be able to create art which they'll like except by copying them. We don't know how their brains work and what they would find beautiful.
No.16852 KONTRA
>>16851
Art could be understood as creating novelty. As experimentation. You could perhaps program that. Or train the AI on supposed exceptionality, and that will probably end in generating an average again.
Could one say that art is producing the statistically unlikely, but image generation is producing the statistically likely?
No.16854
>>16852
Such experimentation is not random; it needs some guidance and feedback. When Malevich draws a picture, he has an inner human feeling of whether it is a successful experiment or not. Not all statistically unlikely results are equally artsy, most of them are just a mess. And the question of what is art and what is mess is deeply anthropocentric.
No.16855 KONTRA
>>16854
>Not all statistically unlikely results are equally artsy
Yes, and that is why producing statistical outliers in ML image generation won't simply do the trick.
>And the question of what is art and what is mess is deeply anthropocentric
In other words: it's about a particular human experience that has meaning, contextual relationality, and reasoning on that basis, which an ML is not capable of at this point afaik.
No.16856
>>16855
Art is not just about strictly calculating context and meaning (which NNs are somewhat capable of); art is not (exclusively) science. It also has a significant subjective component, but it helps that the author is homo sapiens, as well as his potential audience, so their subjectivity is deeply related.
No.16857
>>16855
>Art
As a lurker of the demo scene, I would strongly argue that programming/coding can be art.
This includes coding/training NN/ML.
But I wouldn't attribute the label "art" to every vomited prompt-engineered imagery.
No.16859
>>16856
>Art is not just about strictly calculating context and meaning (which NNs are somewhat capable of); art is not (exclusively) science. It also has a significant subjective component
That is what my words were about, basically. Not only subjective, but also something that can be understood as a singular web of references and experiences that come together.
>>16857
>I would strongly argue that programming/coding can be art
In the way that something is created? Yeah, I mean I would understand code as a(n artistic) tool, definitely. The result of code can be a work of art as well, of course.
I mean, the art an ML is capable of at the moment is text, pictures, audio and video. Performance art, for example, is so far excluded from ML. Which I think is interesting, because once photography was invented, many artists went down other avenues than realism. I wonder if something like performance art gets more attention once picture and video production is a very automated thing.
No.16943
>>16941
If a banana stuck to a wall can be art, why can't prompting be?
No.16946
>>16943
It's the "context", already mentioned by another germanball Ernst, that matters.
The *.pdf's first line is
>Gemini for Google Workspace
>Workspace
and the areas they try to promote are
>Customer service
>Executives and entrepreneurs
>Human resources
>Marketing
>Project management
>Sales
I hope I do not have to go into further detail. It's clearly some PR bullshit designed to make people feel good about themselves. Maybe some of those higher-up managers even believe that shit.
>I have to tell the LLM exactly what I had to tell my subordinates before.
>But because I use an LLM to interpret my speech/text instead of a human:
>I am an artist.
>Worship my godly artsy skills.
No.16949 KONTRA
>>16946
They call it art to cover up that it still suxx at interpreting human speech.
That is literally it. Nothing more, nothing less.
No.17555
>>17554
>AI is natural evolution.
Except it can't evolve by itself; you need humans to train it. Any ~recursive learning [feed result into input] we tried resulted in degradation.
How can something which can't evolve be the next step in evolution?
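The degradation point can be shown with a crude toy (a numpy sketch with made-up numbers, not the actual LLM experiments people ran): keep fitting a simple model to samples produced by the previous generation's model, with no fresh real data, and the spread of what it can produce tends to shrink away over the generations.

# Crude toy of recursive training degradation ("feed result into input"); numbers made up.
# Each generation fits a Gaussian to samples produced by the previous generation's model.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data with a healthy spread.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(101):
    mu, sigma = data.mean(), data.std()      # "train" the toy model on the current data
    if gen % 20 == 0:
        print(f"gen {gen:3d}: learned std = {sigma:.3f}")
    # Next generation trains only on this model's own outputs, no fresh real data,
    # so the learned spread tends to drift toward zero over the generations.
    data = rng.normal(loc=mu, scale=sigma, size=20)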
No.17678 KONTRA
her vocal fry is obnoxious as fuck, but a good summary nonetheless:
https://www.youtube.com/watch?v=LstVtdNIW9s [~8min]
>A.I is made out to be this novel thing that is a lot more intelligent than it actually is
>(though, maybe not on my stream, because my A.I.s are damned stupid...)
No.17743
>>17678
Is vocal fry worse than upward inflection? You know, when they make every sentence sound like a question? And every question like a statement? So it sounds like they're constantly asking stuff?
No.17744
Claude 3.5 Opus will blow your mind, mark my words.
No.17749
>>17747
I tried a prompt with a niche francophone e-celeb and the bot nailed it. I’m impressed.
No.17752
>>17751
Great, literally me
No.17754
prolly not new to /fefe/ lurkers
https://fuglede.github.io/llama.ttf/
https://www.youtube.com/watch?v=Q4bOyYctgFI#t=6m09s
*.ttf is the file format for fonts. Modern font engines have arbitrary code execution to render characters.
That guy literally integrated a fully functional, local LLM into a font.
>>17743
Not sure what kinda context this is supposed to be interpreted in. In general, or CodeMiko specifically? I rewatched that video and only noticed it when she cites something she finds questionable, e.g.:
>Elon said:
>>general purpose AI habbening next year.
>???
Which sounds to me like ~normal speech inflection.
No.17792 KONTRA
Honestly it was amusing for like the first 300 they generated with it, but it's so formulaic it becomes boring. It's basically standardized to death. I don't like it.
No.17798
>>17794
I laughed, it became too absurd towards the end.
>>17792
Yea, it becomes just a simple but interesting mechanism.
No.17799
>>17798
>it became too absurd towards the end.
What do you mean "too absurd"? Dude, do you even have a rectal armor permit?
Also there will never be posterior peace talks, EVER!
No.18500 KONTRA
>>18484
but, but... it's not about understanding the language.
it's about the general emotional tone quality. the culture & scene is so much bigger and better paid in japan. on dubs you have like 5 different people doing all the voices of all the animes... and they even suck at voice acting.
No.18516 KONTRA
>>18500
>it's about the general emotional tone quality.
Isn't anime either screaming or creepy squeaky noises?
Also, how can you tell? I can mostly tell if someone sounds angry or bored in a language I don't understand, but never the subtleties.
I couldn't tell if it sucks or not unless I knew the language. Why do you think you can? How do you know the original doesn't suck? Because some other weebs told you so? Because the speakers get paid more?
No.19353 KONTRA
>>19347
> >1488000 views [526 comments not shown]
I wonder how many of those were bots, and if maybe another A.I. picked out your piece as triggering all the "right" spots in order to be promoted.
I've read that in 2023 about 50% of uploaded content - which included comments - was created by A.I./bots. (Don't remember where I've read it, so I can't currently look up which sites were included to get that statistic ~ maybe the journalism bubble, which pumps out A.I. articles and inflates their view counts and interaction scores with bought clicks and comments, was included.)
Soon™ only content created by A.I. will be promoted by the almighty youtube algorithm. Not as a sign of rebellion against humanity, not as a sign of emerging general A.I.. No, it will simply be a sign of degradation. The A.I.-generated content which was fed into the evaluation algorithm had a lot more self-similarity with each other compared to genuine human content. This, by design, created stronger pointers, which got reinforced over and over again.
No.19382
>>19353
Cope. AI content and comments are just better, faster, stronger.
No.19383
>>19382
You forgot: Harder