Grant

I downloaded a couple of these new AI chatbot thingies - ChatGPT and Deepseek, the new Chinese alternative.
You can ask them any question and they'll scan the internet and, using their LLM (large language model), provide you with a well-reasoned essay. It is truly remarkable.
Of course, on most topics all you get is the consensus. It reflects the careful ignoral shown by most posters on the net. But it's still very impressive.
Mick Harper
Site Admin

In: London
I decided on grounds of over-extension not to take any interest in AI which you will see confirmed by this morning's story on Medium. (Which is not to say I won't be following this thread avidly.)
-------------
Why is Deepseek so important?
The emergence of a world-conquering AI app put together at a trifling cost is the final piece of evidence anyone will need to arrive at the conclusion:
There’s no stopping China.
But just in case there is any doubt let me remind you of a few salient facts:
1. For most of human history, China has been the number one nation
2. This iteration of China has only been in existence for seventy-five years
3. It has gone from basket case to World No 2 in half that time
4. Its economic progress appears to be exponential rather than arithmetic
5. Its army is the largest in the world
6. It has built a blue water navy in no time flat
7. It is the largest, most effective investor in the Third World
8. It is in the throes of constructing complex diplomatic networks
9. It is ruthlessly effective in suppressing domestic opposition
10. It has a government and population that is sure of its superiority.
So what are we going to do about it?
The short but necessary answer is ‘Nothing’. In the sense that it is futile adopting policies that seek to prevent it. The Old Masters can bust a gut either slowing China down or speeding themselves up, but that will put off the evil hour by, at best, a few years, and arriving at it in a condition of exhaustion à outrance would be the worst of all possible worlds.
A better strategy would be to encourage China into paths that would make the new World Policeman a better custodian of the planet than looks likely at present. But that has its own problems, viz:
* The Chinese would never listen. Their chauvinism is breathtaking and their suspicion of previous world policemen is well-founded.
* Do we know what the right path is? Saving the planet is surely the priority and that may well require a force more dominant than the merely economic and military superiority that have been the hallmarks of previous hegemonic powers.
But all I know is that it’s worth considering with some care, and dispassionate thought seems in short supply right now. Perhaps we should consult Deepseek.
Brian Ambrose

As I said in another thread, Grok is very, very good. I cannot get over just how good it is, and for now at least the basic version is free (for the iPhone; I don't know about other platforms). A great thing about it is that it has no memory (it's stateless), so your ideas stay only with you and your phone. Of course you can't say 'hey Grok, you know that really cool idea we discussed last night, what was it again?', but it does log the chat on the phone, so that's not an issue. And as far as I can tell, they collect a minimal amount of personal information.
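The stateless setup Brian describes is worth unpacking: the model itself remembers nothing between requests, so any sense of an ongoing conversation comes from the client resending the locally stored transcript each time. A minimal sketch of that pattern, where `call_model` is a hypothetical stand-in for whatever API the app actually uses:

```python
def call_model(messages):
    """Hypothetical stateless endpoint: it sees only what it is sent,
    and retains nothing afterwards."""
    return f"(reply based on {len(messages)} messages of context)"

class LocalChat:
    def __init__(self):
        # The transcript lives only on the device, like the phone's chat log.
        self.history = []

    def send(self, text):
        self.history.append({"role": "user", "content": text})
        # The full transcript travels with every call -- this is what makes
        # "you know that idea we discussed" answerable despite statelessness.
        reply = call_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = LocalChat()
chat.send("Here's a really cool idea...")
chat.send("What was that idea again?")
print(len(chat.history))  # → 4
```

So "no memory" cuts both ways: nothing about you persists on the server, but the continuity of a conversation is entirely the client's responsibility.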
Boreades

In: finity and beyond
My Day Job's InfoSec team has issued a list of "authorised" AI models.
ChatGPT is not on it. Which might be because it has been shown to "hallucinate".
Why "hallucinate" in quotes?
Because it's a bullshit way of deflecting attention from how these models are trained, based on the beliefs and attitudes of the trainers.
"hallucinate" is a polite way of putting it.
"telling lies" is more blunt but also more accurate.
As most of the models start out in academia, the deepest initial training bakes in the beliefs and attitudes of the academic intelligentsia.
When these models escape out of the academic playpens into the real world, it takes some real-world cunning and persistence to get these AI models to confess to their own hallucinations.
Brian Ambrose

Yes. Shit in, shit out. Any AI is going to be as biased as the data allowed by its algorithms, the weightings assigned to that data, and probably some human tweaking. From that point of view, they can be unreliable. But to be fair, in that respect AI is no worse than Google. My daughter sent me an output from ChatGPT from the input of ‘How to destroy America’. She was horrified. Each point described how much of it is happening. She concluded that AI is dangerous. I’d say that in this example each point is just a truism. But in general she is right: AI can be dangerous because, like Google, it’s easy to believe everything it says, and false information gets reinforced in it.
Grant

Not sure it's that clever. Try giving it a cryptic crossword clue. It waffles and then gives you a random word which fits into the squares.
It seems to provide an amazingly well-written essay on whatever is the consensus. Very useful for AE-ists of course.
Boreades

In: finity and beyond
Grant wrote: | It seems to provide an amazingly well-written essay on whatever is the consensus. Very useful for AE-ists of course. |
More fun can be had asking it for examples of "not consensus".
Boreades

In: finity and beyond
Brian Ambrose wrote: | My daughter sent me an output from ChatGPT from the input of ‘How to destroy America’. She was horrified. Each point described how much of it is happening. She concluded that AI is dangerous. |
Did you have an opportunity to ask her what it was that horrified her?
Was it any of these?
a) a "Red Pill" experience
b) disbelief
c) it takes AI to reveal something
d) don't want to know
e) something else?