Laptops in Lectures

This post has been brought on by this post on Slashdot (or more directly, this article). Basically, a professor at the University of Memphis, USA, has banned laptops from her classroom. As most people who've shared a lecture with me will know, I use a laptop to take notes. I've decided that I should really explain why I do this, and while I'm at it I'll describe how I view lectures in general too.

Why do I use a laptop? Well, for starters my handwriting is pretty bad. It’s probably gotten worse now as I don’t write much any more, but even at the start I found reading typed notes much easier. Another reason, which has developed over time, is that I type an awful lot faster than I can write – so I can get down much more of the detail given in the lectures, which is useful when reading through the material again at some future point. And finally, I can type without thinking much about the typing – so I listen to the lecture, and understand more of the material given in it straight away.

What are the downsides to using a laptop? Well, at the start, I had problems with equations – but I quickly found MathType, which means that I can type equations quickly, rather than having to use the mouse to do them. So I can now type equations in pretty easily and quickly, although more slowly than I can type prose, as it requires more key strokes per character. For example, for Greek letters I hold down the Apple key (I use an Apple Mac computer) and press the 'g' key; I then release the Apple key and press the key for the Greek letter – 'd' for delta, 'g' for gamma, and so on.

Another big problem, and one I have yet to resolve adequately, is diagrams. I've tried multiple approaches over the years. Using a mouse (or rather, a touchpad) to draw them takes too long. Using a graphics tablet can be confusing (you're drawing in one place and it's appearing in another – though you get used to this), messy (wires everywhere) and slow (mainly due to the software, but also the delays in picking up and putting down the stylus, especially when typing in labels for the diagrams). So at the moment I just get pen and paper out, doodle the diagrams down, give each one an ID, insert the ID into the document at the appropriate place, and draw them in later.

If you've read the article I linked to above, you'll know that the professor basically said that computers take up all of the student's attention, and also create a 'picket fence' between the student and the teacher. If you've read my comments above, you'll realise that I don't think this is a problem. What I do think can be a problem with using a laptop in class is if the student isn't using it to take notes – playing games, surfing the internet or chatting via IM has no place in a lecture – or if they're using it badly, e.g. they can't type fast. Also, if it distracts the teacher, or other students, then it's not good. (Incidentally, if you're in a lecture with me and I'm distracting or annoying you with typing, let me know – I'll probably ask you why it's disturbing you, and if you've got a valid reason I'll put the laptop away and use pen and paper. I'll then chat to you after the lecture about how I can keep the laptop from disturbing you in the future.)

Now, on to my views of lectures in general. I firmly believe that the purpose of a lecture is to convey understanding of the subject material from the teacher to the student. It is not a group note-taking session; that only distracts the student from the subject. Note that this is actually contrary to what the physics department of the University of Manchester (where I am at the moment) officially states.

My ideal lecture course would have lecture notes provided beforehand (either on paper or via the web – preferably both; and either verbose lecture notes or presentation slides), which can be read by the student before the lecture starts. Then the lecture goes through the material in the notes, with the emphasis on explaining the material and making sure the students understand it. Regular "Put your hand up if you understand what's going on" prompts from the teacher should make sure that everyone's paying attention, and also prevent the "I don't want to be the only person to put my hand up" effect that often happens if you ask who doesn't understand the material. Also, at the end of the lecture, do a quick summing-up of the lecture and say what will be taught in the next one – and make sure the students are paying attention, not packing up and trying to leave. In fact, it's probably a good idea to start off the lecture in a similar way.

Broadcasting via the Net

In a departure from the usual, this post is about technology. Specifically, TV and the internet. Precisely, how to use the latter to transmit the former.

The internet's developing pretty nicely – it currently connects a large proportion of the First World, and will hopefully be making greater inroads into the Third World in the future. Of those that are connected to it now, a large number have nice, fast connections – easily enough to download TV-quality video in real time.

Yet we don't have TV broadcast over the internet. Why? One reason would probably be piracy concerns, but I won't talk about those here. Another big reason is the sheer amount of bandwidth that the broadcaster would need. Let me explain.

The internet works by you requesting data from a server, and that server sending the data to you via a series of relays. That data goes to you, and only you (excluding people snooping on it, but that's another topic). So were you to receive a TV channel over the 'net, there would be a dedicated stream running from the server directly to you. For decent video quality, that requires a fair bit rate – and that bit rate needs to be delivered to a few million people simultaneously. That's a huge amount of data that the TV station's server needs to pump out – far more than is feasible.
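A quick back-of-the-envelope calculation shows the scale of the problem. The figures here are my own illustrative assumptions, not numbers from any real broadcaster:

```python
# Back-of-the-envelope unicast bandwidth estimate.
# Assumed figures: 4 Mbit/s per stream (roughly decent-quality video)
# and 5 million simultaneous viewers.
stream_bitrate_mbps = 4
viewers = 5_000_000

total_mbps = stream_bitrate_mbps * viewers
total_tbps = total_mbps / 1_000_000  # 1 Tbit/s = 1,000,000 Mbit/s

print(f"Server must send {total_tbps} Tbit/s")  # prints 20.0 Tbit/s
```

Twenty terabits per second out of one server farm, every second the channel is on air – which is why sending a separate copy to each viewer doesn't scale.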

So, how can this be got around? My answer would be to mirror what TV stations currently do to a certain extent – broadcast something once, and let everyone get copies of it. How? Imagine the server, sending out a single stream of video. You want to get this to the millions of recipients. Those recipients are connected to the server via a whole set of wires, relays and routers. The last of these is important here. Whenever the signal gets to a router, and needs to go more than one way, the router should just send copies of it each way. Think of it as a tree system, with the broadcasting server at the trunk, the recipients as the leaves, and the routers as the points where branches sprout off.
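The tree idea can be sketched in a few lines of Python. The topology and the names in it are entirely made up for illustration – the point is just that the server emits each packet once, and the copies multiply only where branches split:

```python
# Toy model of tree-based broadcast: each node receives one copy of a
# packet and forwards it down every branch below it, so the source
# sends each packet exactly once regardless of audience size.

# Hypothetical topology (all names invented for this sketch):
tree = {
    "server":   ["router_a", "router_b"],
    "router_a": ["viewer_1", "viewer_2"],
    "router_b": ["viewer_3", "router_c"],
    "router_c": ["viewer_4", "viewer_5"],
}

def broadcast(node, packet, delivered):
    """Forward one copy of `packet` down each branch from `node`."""
    for child in tree.get(node, []):
        if child in tree:            # another router: keep forwarding
            broadcast(child, packet, delivered)
        else:                        # a leaf: a viewer receives the copy
            delivered.append(child)

viewers = []
broadcast("server", "frame_0001", viewers)
print(viewers)  # all five viewers got the frame; the server sent it once
```

This is essentially what IP multicast does, for what it's worth – the idea has been specified for the real internet, even if it's rarely deployed end-to-end.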

There are a number of things that you need in order to do this. Some are easy, some are very difficult. First, an easy one: you need to know all of the recipients that want the TV signal. That's easy, because you just continue receiving the same requests as happen currently. Now, the difficult ones. You need to know the topology of the internet – the quickest routes to each of the recipients, and also the most economical (the routes which will cut the number of copies, and hence the total amount of data travelling through the internet, down to a minimum). That's difficult, but not impossible with a fair bit of maths and computer programming.
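As a minimal sketch of the route-finding part: if you treat the network as a graph, a breadth-first search from the server gives you a tree of shortest routes to every recipient. (Real networks have weighted links, so you'd want something like Dijkstra's algorithm, and the network below is a made-up example.)

```python
from collections import deque

# Hypothetical unweighted network; every name here is invented.
links = {
    "server": ["r1", "r2"],
    "r1": ["server", "r2", "home_a"],
    "r2": ["server", "r1", "r3"],
    "r3": ["r2", "home_b", "home_c"],
}

def shortest_path_tree(source):
    """Return {node: parent} for a breadth-first tree rooted at `source`."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in links.get(node, []):
            if neighbour not in parent:   # first visit = shortest hop count
                parent[neighbour] = node
                queue.append(neighbour)
    return parent

tree = shortest_path_tree("server")
print(tree["home_c"])  # prints "r3" – the last router before home_c
```

The resulting parent map *is* the broadcast tree: to reach everyone, each router just forwards to the nodes that list it as their parent.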

The most difficult problem is that you need to split the signal at the routers, which requires software running on the routers looking for the splitting commands. On the internet as a whole, that's a huge number of routers – most of which would probably need replacing to be able to cope with this (Cisco and the like would probably love that). The good news is that not all routers need to be able to do this – you can substitute for those that can't by using the current system of multiple streams, such that you end up with multiple trees.

I should say that this doesn't only have applications to TV broadcasting – it would apply to normal data transmission too, if routers could combine pieces of data that are the same and are going to geographically close-together locations. That would probably cut down the amount of data being transmitted at any one time by a fair amount, in the same way that zipping a set of files decreases the amount of disk space needed to store them. It would also remove the problem of servers dying whenever large numbers of people access them simultaneously (e.g. the Slashdot effect).
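The combining step itself is simple to sketch: group pending packets by payload and merge their destination lists, so identical data travels once. (The payloads and destinations below are invented for the example.)

```python
# Sketch: merging identical payloads into one packet tagged with
# multiple destinations. All names here are made up for illustration.
pending = [
    ("video_frame_7", "home_a"),
    ("video_frame_7", "home_b"),
    ("weather_page",  "home_a"),
    ("video_frame_7", "home_c"),
]

merged = {}
for payload, destination in pending:
    merged.setdefault(payload, []).append(destination)

for payload, destinations in merged.items():
    print(payload, "->", destinations)
# video_frame_7 is now sent once, tagged with three destinations
```

The hard part in practice isn't the grouping, of course – it's that the router has to inspect payloads to do it, which is exactly the privacy problem discussed below.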

I'll finish with the downsides. First, this system would be very much time-based – the data would have to be requested, and/or transmitted, simultaneously to multiple recipients. The second is probably the killer – privacy and copy-protection. The routers would need to read through the content to some extent to process it, i.e. compress it and tag it with multiple locations. People would probably consider that to be rather Big Brother-ish. Also, such things as so-called Digital Rights Management (DRM) would probably be incompatible with this system, as would encryption (as I understand it, both of these scramble the data in unique ways, such that only the intended recipients can view it – these would then have to be treated as separate data streams). But then, the current TV broadcasting systems don't have DRM or encryption – anyone with a TV and an aerial can receive them. So maybe there's hope for this idea yet.


A realization I had recently relates to one of the fundamentals of physics – it's all about differences. Once more, it seems, I'm going to run into problems with the English language in this post – although in this case, that could well just be me. What's probably going to cause more of a problem is that I'm going to talk about religion a bit later on.

At no point in physics, except in the occasional theory, do we ever talk about something that's omnipresent (defining this as being constant everywhere), or ubiquitous (being constant at all times). Yes, I know that omnipresent doesn't necessarily mean that it's the same everywhere – but it will here. And yes, I know that things like the speed of light are constant everywhere – but that's not what I'm meaning. To describe exactly what I mean, I'm going to have to go into those theories that break this rule of mine – for example, the luminiferous aether. To put it precisely,

If something is present at every point that we look at, then we can’t detect it.

But, you'll say, a piece of wood is present at every point on the piece of wood. True, but the only way we can detect that wood is from an external point of view, such that we recognize that the wood is something different from the air. Or to look at it another way, from within the wood, what we see are the particles which make up the wood, made measurable by the lack of stuff surrounding them (imagine an atom: it's some small bits surrounded by nothing).

Let's look at the theories that state that something is omnipresent. Luminiferous aether was a theory from the end of the 19th century, invented as a medium to propagate electromagnetic radiation – or light. The theory is now obsolete.

Let's take another: the Higgs field. This is a recent theory (suggested in 1964; it's a hot topic at the moment) that basically says that particles are given their mass through interactions with an all-permeating (i.e. omnipresent, by my definition), constant field. I'll say now that I don't like this theory – simply because it must be omnipresent. It does, however, have a testable spin-off – the Higgs boson. Whether or not this will be found is something for the guys at CERN to discover (or possibly a future particle accelerator, if CERN doesn't find it), but the omnipresent field will never be testable.

And as a final example, let's take God. Up to a short time ago, I always thought that God was omnipresent (not necessarily by my definition), but it's worth reading the Omnipresence article at Wikipedia to find out why this was not always so (in the Christian religion, that is). Let's take the modern perspective that God is omnipresent (by the standard definition). If God is omnipresent by my definition, then physics will never be able to come up with a proof of God's existence, or a proof that God doesn't exist. I guess we'll have to wait until we die to find out the definite truth one way or another.

Dodging the Paradoxes

Once more, this is a post about the Physics and Reality course I’m doing at the moment, although this is slightly off-topic. What I intend to state, along with a couple of examples, is that science has a history of investigation not because of the big questions, but despite them.

What do I mean by this? Let's take a fairly old example – Zeno's paradoxes. These basically pose the problem of motion – "In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead." (Aristotle, Physics VI:9, 239b15). What was science's answer to this? Newtonian mechanics and calculus. I don't think it's really gotten much further than that. Since then, we generally just take the position of not worrying about it and getting on with things.
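The calculus answer fits on one line: the infinitely many catch-up steps sum to a finite distance. A sketch, assuming (for illustration) the pursuer runs twice as fast and starts one unit behind – the catch-up distances form a geometric series:

```latex
% Each time the pursuer covers the remaining gap, the pursued has
% moved on half as far, so the total distance run before overtaking is
\sum_{n=0}^{\infty} \left(\frac{1}{2}\right)^{n}
  = \frac{1}{1 - \frac{1}{2}} = 2
```

An infinite number of stages, but a finite total – so the overtaking happens at a definite point, and the paradox dissolves.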

A more recent example would be the Big Bang. Basically put, the age-old question is: where did the universe come from? Well, scientists have poured huge amounts of effort into this, and have come up with the Big Bang model – and this is as close as we can get to answering the question. Science will most likely never come up with a definitive answer to the question; that stays firmly in the grip of religion.

Another example, and the final one I’ll give, would be Quantum Mechanics. This basically puts a fundamental limit on what we can know beyond a certain point – via the uncertainty principle – and I’m slowly getting the feeling that it’s basically saying “don’t worry about it; it’s all just magic”. Which to me seems to be the complete opposite of what science is.