We need to stay sane. For that we need critical thinking.
Our social media, tailored by algorithms, and the newsfeeds we tailor to our own tastes deliver information that coincides with our views and beliefs.
When we put blinkers on, our critical thinking cannot thrive. It becomes not that critical and not that thinking. It becomes plain consumption of information.

The fragment above is something I wrote while analyzing the use of AI in social media and content creation. I did that analysis as a form of procrastination instead of writing this blog post, which I actually needed to write. That is when I came across TalkToTransformer, a tool built by Adam King that lets you play with OpenAI’s new machine learning model, GPT-2. You feed it a short passage and it generates coherent paragraphs of text, one word at a time, so there is no plagiarism. This is a limited version of OpenAI’s technology; they decided not to publish the full model because of concerns about malicious applications. But the smaller version is out there, and it’s good. I fed my passage to the robotic mind, and this is what I got:

When we put blinkers on our critical thinking then we are giving up the ability to consider others.
If we think that our critical thinking isn’t important then there is no room for discussion or debate.
We are creating a mental state that is completely unable to engage with others and create meaningful dialogue about what is actually happening on the ground.
And if we don’t have any dialogue we can not even hear each other.
We think that if only we have voices to hear from then we can at least have someone listen?

Not bad, right? Especially the last line: I wanted to hug this beautiful artificial mind for being so human. The GPT-2 model uses an approach called “unsupervised learning,” which is considered the front runner of AI development. Evolving AI needs more and more data. For supervised learning, every bit of that data has to be accurately labeled and has to contain the desired inputs and outputs; even a small mistake can ruin the whole learning process. Soon it may become too complicated to label that much data in an adequate timeframe. Unsupervised learning, which is closer to the way humans acquire knowledge, seems to be the way to go: you set an objective for the model, roll out lots of data, and the AI has to figure everything out by itself. You can imagine (I can’t, though) the speed and volume at which AI needs to digest ever more data.

Today’s news tells us that the chip industry is trying to keep up. Artificial intelligence company Cerebras Systems just unveiled the largest semiconductor chip ever built, with 1.2 trillion transistors. It is as big as a mousepad, and its size enables it to run AI-related calculations much faster.

“The hard part is moving data,” explains Cerebras CEO Andrew Feldman. Training a neural network requires thousands of operations to happen in parallel at each moment in time, and chips must constantly share data as they crunch those parallel operations. But computers with multiple chips get bogged down trying to pass data back and forth between the chips over the slower wires that link them on a circuit board. Something was needed that could move data at the speed of the chip itself. The solution was to “take the biggest wafer you can find and cut the biggest chip out of it that you can.”
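If you want this kind of conversation partner on your own machine, the released small model is only a few lines of Python away. The sketch below is not how TalkToTransformer itself is built; it assumes the Hugging Face transformers library, which distributes the publicly released small GPT-2 checkpoint, and it also shows why the training counts as unsupervised: the target for every position is simply the next word of the raw text.

```python
# A minimal sketch, not TalkToTransformer's actual code. It assumes the
# Hugging Face "transformers" library, which distributes the publicly
# released small GPT-2 checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "When we put blinkers on, our critical thinking cannot thrive."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation is just repeated next-word prediction: sample one token,
# append it to the context, and predict again.
output_ids = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# The "unsupervised" part: the training objective needs no human labels.
# The target at every position is simply the next word of the raw text,
# so passing the input as its own label yields the language-modelling loss.
outputs = model(inputs["input_ids"], labels=inputs["input_ids"])
print(f"loss on the prompt: {outputs.loss.item():.2f}")
```

Every run gives a different continuation, because the generation samples from the model’s predictions rather than always picking the most likely word.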

I keep exploring what AI tools are out there, but this TalkToTransformer website will stay with me as a friend I can talk to at length and who gives me really interesting replies that fuel my own thinking. A wafer and a chip for you, my smart AI friend; you definitely deserve it.