Learning Aikido

Aikido can be a challenging and frustrating art to learn.

When you first start it seems as if there are hundreds of techniques, scores of attacks and thousands of combinations – each with a ten-syllable Japanese name. Though, actually, it is not the sheer number of moves that makes Aikido difficult. Once you get into it you soon realise that there are only a dozen or so “important” techniques and a similar number of common attacks. While there are indeed a vast number of combinations and variations, the basic starting points fit together like a jigsaw. For any attack there are only a few useful starts (a small tenkan to avoid a moving ai katate dori attack, a “back triangular” foot movement for gyaku and so on) and once you’ve done the start there are only a few techniques that make much sense. Later on you find that small changes of angle and posture can create a seemingly unbounded number of variations, but you don’t need to worry about that so much to begin with.

So there may be lots of combinations but the number of basic moves to get the hang of is not that great. Pretty soon you start to feel that you’ve seen most of the main moves already. Throw in a bit of blending and remember to breathe, and you are sorted.

Then it starts to get hard.

Watch someone who is fairly new to Aikido perform a common technique – say ai katate dori ikkyo. Then watch a senior do the same move. It’s clear the senior is much better at it (I hope!). They’ll look more relaxed, more in control, with better posture – the technique just works. Then look at how their hands and feet move – to an untrained eye the moves the two aikidoka are making might look pretty similar. So, why the difference in effectiveness? That’s the challenge in Aikido. It is what is sometimes called an inner art. I’ve no idea what that means officially, but it certainly seems to me that the difference between a technique that works and one that doesn’t is mostly about how it feels on the inside. As you progress you rapidly understand that the nuances of how you move your hips, how you hold your arms, how relaxed you are, how you breathe – all make a big difference to how your Aikido works, yet only a small difference to what, from the outside, your body seems to be doing.

When I had been training in Aikido for about a year we had a senior guest instructor come to our club. He showed ai katate dori ikkyo. I had done ikkyo many times by then but he said something very useful. He said “Ikkyo, first technique we meet, last technique we learn”. That was a great relief to me. I’d been working hard for a year. I had a shiny yellow belt (well, actually, a pale lemon coloured belt, but that’s another story), but I still didn’t feel I could make the most basic technique like ikkyo work well. That lesson helped me to understand that it wasn’t the details of the angles and how I moved my feet I needed to learn next – it was how to get the feeling of the technique right.

There are a few basic principles that describe the nuances of how Aikido techniques work. We could just write (most of) them down:

Relax. Move from the hips. Lead. Blend. Keep your hands in front of your centre. Use both hands. Relax. Keep weight underside. Maintain ‘one point’. Torifune. Extend (or extend Ki depending on your style). Breathe out. Triangle, circle, square. Avoid, balance, control. Join, connect, catch. Keep your head up. Relax.

The trouble is that just writing them down doesn’t help much. Ironically it was a work of fiction, whose lead character studied karate, that helped me to understand the problem here. In this story our hero was trying to learn a particular form of snap kick. His sensei told him “lift your leg, relax your knee, let your leg flip up”. He practised for weeks focussing on this move. He didn’t get any better. Then at the end of a particularly long session he was so tired he didn’t think about it and suddenly executed a perfect snap kick. A fellow student saw how good it was and asked how he had done it. “Well, I just sort of relaxed my knee and let my leg flip up!”

The point is that the descriptions like “relax your knee” do capture what you are striving for, and do describe the principle quite nicely once you have learnt it, but don’t really tell you how to achieve that learning. For that the only thing is practice, and then more practice.

The fact that these principles run so deep, make such a big difference, and yet take so much to develop, is what makes Aikido so endlessly rewarding, as well as frustrating.

So just relax, extend, keep weight-underside and remember to breathe …

Midori at Thornbury Aikido


Midori and the Thornbury group 2007

For several years now we have been lucky enough to be visited fairly regularly by Midori Sensei (Midori Kajihara). Having been a senior dan grade within Kobayashi dojos, Midori has moved on to study Aiki in greater depth, and each year we marvel at the way her Aikido has developed.

While my normal Aikido is traditional, I have practised Ki style a little and am used to notions like weight underside, in which the way you think about your arm moving affects how hard the movement is to resist. Midori’s Aiki takes this several stages further. Notions of join and contact affect how you think about your connection to Uke (your partner), and subtle internal shifts in tension and weight are enough to make the techniques work with little effort and little visible movement from the outside. This subtlety makes Midori’s style challenging to learn but immensely rewarding when you begin to get a glimmer of what’s going on – especially for someone like me who normally relies on big flowing movements to make it all work!


A grab with a touch of Yonkyo


Untwisting her body throws me off with no apparent effort

Using RDFS or OWL as a schema language for validating RDF

[This post is rescued from an ancient SWAD-E FAQ list because I want to update it.]

Many software applications need the ability to test that some input data is complete and correct enough to be processed, e.g. to check the data once up front so that access functions will not break later due to missing items. This is commonly done by using a schema language to define what “complete and correct” means in this syntactic sense, and a schema processor to validate data against the schema.

Developers new to RDF can easily mistake RDFS for a schema language (perhaps because the ‘S’ stands for schema!). They then get referred to OWL as providing the solution, and are then surprised by the results of trying to use OWL this way.

This is a big topic which we’ll just touch on here. In this FAQ entry I just want to illustrate a few of the pitfalls and hint at why this is harder than it looks, in the hope that it might reduce the “unpleasant surprise” for developers new to OWL.

To spoil the punch line: there isn’t yet a really good schema solution for semantic web applications, but one is needed. OWL does allow you to express some (though not all) of the constraints you might like. However, to use it you may need an OWL processor which makes additional assumptions relevant to your application – a generic processor will not do the sort of validation a schema-language user is expecting.

The problems arise from fundamental features of the semantic web:
- open world assumption
- no unique name assumption
- multiple typing
- support for inference

Let’s look at a few examples of schema-like constraints you might want to express:

1. Required property

Suppose you want to express a constraint something like “every document must have an author”. You might say something like:

eg:Document rdf:type owl:Class;
    rdfs:subClassOf [ a owl:Restriction;
        owl:onProperty     dc:creator;
        owl:minCardinality "1"^^xsd:integer ] .

eg:myDoc rdf:type eg:Document .

You might think that if you asked a general OWL processor to validate this it would say “invalid”, because eg:myDoc doesn’t have an author. Not so. The OWL restriction is saying something that is supposed to be “true of the world” rather than true of any given data document. So, seeing an instance of a Document, an OWL processor will conclude that it must have an author (because every Document does) – just not one we know about yet. So in fact if you now ask an OWL-aware processor for the author of myDoc you might, for example, get back a bNode – an example of the inferential, as opposed to constraint-checking, nature of OWL processing. This also fits in with the open world assumption – there may be another triple giving an author for myDoc “out there” somewhere.
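To make this concrete, here is a sketch of the extra assertion such a reasoner might effectively add (the bNode label _:a is purely illustrative):

    eg:myDoc dc:creator _:a .    # some author exists; we just don't know who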

Of course, the fact that general OWL processors behave this way doesn’t prevent one from creating a specialist validator which treats a document as a complete, closed description and flags any such missing properties – it is just that a generic OWL reasoner probably won’t do this by default.

2. Limiting the number of properties

A related example is expressing the constraint that “every document can have at most one copyright holder”.

  eg:Document rdf:type owl:Class;
              rdfs:subClassOf [ a owl:Restriction;
               owl:onProperty     eg:copyrightHolder;
               owl:maxCardinality "1"^^xsd:integer ] .

  eg:myDoc rdf:type eg:Document ;
           eg:copyrightHolder eg:institute1 ;
           eg:copyrightHolder eg:institute2 .

Again, if you ask a general OWL processor to validate this set of statements you might expect it to complain that there are two values for eg:copyrightHolder. Not so. In this case, the problem is the absence of a unique name assumption. On the web two different URIs can refer to the same resource and there is no defined way to tell this. Unless there is an explicit declaration that eg:institute1 and eg:institute2 are owl:differentFrom each other there is no violation.

Indeed, just as in the first example, what an OWL processor does is the reverse. Instead of noticing a violation it infers additional facts which must be true if the data is consistent; in this case it would infer:

       eg:institute1 owl:sameAs  eg:institute2 .

Again, a specialist OWL processor could be told to make an additional unique name assumption to handle such cases but that is not a good thing to do in general. In fact, using such cardinality constraints (e.g. in the guise of owl:InverseFunctionalProperty or owl:FunctionalProperty) to detect aliases is a powerful and much used feature of OWL.
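If what you actually want is for the extra value to be reported as an error, you have to state the missing assumption explicitly. For example, adding the declaration below makes the data above inconsistent, and a generic OWL reasoner will then report the clash:

    eg:institute1 owl:differentFrom eg:institute2 .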

Life is a little easier if one is dealing with DatatypeProperties, because you can tell when two literals are distinct (well, even this is hard when you are looking at different xsd number classes, but at least strings are easy!).

3. Type constraints

The third common schema requirement is to limit the types of values a given property can take. For example:

  eg:Document rdf:type owl:Class;
              owl:equivalentClass [ a owl:Restriction;
               owl:onProperty     eg:author ;
               owl:allValuesFrom  eg:Person ].

  eg:myDoc rdf:type eg:Document ;
           eg:author eg:Daffy .
  eg:Daffy rdf:type eg:Duck.

  eg:myDoc2 eg:author eg:Dave .
  eg:Dave rdf:type eg:Person .

Does the myDoc example cause a constraint violation? No. In RDF an instance can be a member of many classes. Unless we are explicitly told that the classes eg:Duck and eg:Person are disjoint, all that happens with the myDoc example is that we infer that eg:Daffy must be a Person as well. Again, a specialist processor could be developed to flag a warning in cases where an object is inferred to have a type which is not a known supertype of its declared types; again this would be making additional assumptions not warranted in the general case but useful for input validation purposes.
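As in the cardinality case, the way to make the violation visible to a generic reasoner is to state the missing assumption explicitly. Adding the declaration below makes the myDoc data inconsistent (Daffy would have to be both a Duck and, by the restriction, a Person):

    eg:Duck owl:disjointWith eg:Person .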

Having got the hang of the idea that OWL is more about inference than constraint checking, what about myDoc2? Should the OWL processor infer that myDoc2 is a Document? After all, we defined Document this time using a complete, rather than partial, definition – so anything for which all authors are Persons should be a Document, and the author of myDoc2 is a Person. The answer, again, is “no”. Just because all the authors we can see happen to be people doesn’t mean there aren’t more authors for myDoc2 that we don’t know about.

4. Value ranges

Another common schema requirement is to limit the range of a value. For example to say that an integer representing a day-of-the-month should be between 1 and 31.

Data ranges are not part of OWL at all.

You can express them within XML Schema Datatypes. You could declare a user-defined XSD datatype which is an xsd:integer restricted to the range 1 to 31.

There is a problem that XML Schema doesn’t define a standard way of determining the URI for a user-defined datatype, and the RDF datatyping mechanism requires all datatypes to have a URI. This will hopefully get “clarified”, and in any case there is a de facto convention which is straightforward, used by DAML and supported by toolkits, so in the meantime we can be non-standard but get work done.

It is also slightly less useful than it seems, since the RDF datatyping machinery requires that each literal value have an explicit datatype URI – you can’t just give a lexical value and use range constraints to apply the type.
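For illustration, a literal typed with such a user-defined datatype might look like the following, where the datatype URI follows the de facto “schema document URI + ‘#’ + type name” convention (the schema URI and property here are made up):

    eg:someMeeting eg:dayOfMonth "15"^^<http://example.org/types.xsd#dayOfMonth> .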

These caveats aside, the xsd user-defined datatype machinery is useful and this is the one place where RDFS on its own, without OWL, can do some validation. An RDFS processor should detect if the lexical form of a typed literal does not match the declared datatype.
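For example, an RDFS-aware processor can flag a (made-up) triple like this one, because the lexical form is not in the lexical space of xsd:integer:

    eg:myDoc eg:pageCount "twenty"^^xsd:integer .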

5. Complex constraints

The final forms of constraint that come up are ones which involve relationships between values. For example, that a pair of properties should form a unique value pair, or that the value of one datatype property must be less than that of another property of the same resource, or of a related resource.

No such cross-property constraints can be expressed in OWL at all.

FAQ: Why do rdfs:domain and :range work backwards?

[This post is rescued from an ancient SWAD-E FAQ list to make it easy to point to, since it's a problem that comes up on jena-dev fairly frequently.]

Q. Why do rdfs:domain and rdfs:range seem to work back-to-front when it comes to thinking about the class hierarchy?

A. Because RDFS is a logic-based system. The way rdfs:range and rdfs:domain declarations work is alien to anyone who thinks of RDFS and OWL as being a bit like a type system for a programming language, especially an object-oriented language.

To expand on the problem, suppose we have three classes:
eg:Animal eg:Human eg:Man

And suppose they are linked into the simple class hierarchy:
eg:Man rdfs:subClassOf eg:Human .
eg:Human rdfs:subClassOf eg:Animal .

Now suppose we have property eg:personalName with:
eg:personalName rdfs:domain eg:Human .

The question to ask is this: “can we deduce:
eg:personalName rdfs:domain eg:Man ?”

The answer is “no”; the correct such deduction is:
eg:personalName rdfs:domain eg:Animal .

This is completely obvious to anyone who thinks about RDFS as a logic system; however, it can be surprising if you are thinking in terms of objects.

A common line of thought is this: “surely [P rdfs:domain C] means roughly that P ‘can be applied to’ objects of type C, just like a type constraint in a programming language. Now all instances of eg:Man are also eg:Human, so we can always apply eg:personalName to eg:Man things – doesn’t that mean eg:Man is in the domain of eg:personalName?”

There are two flaws in this line of thought. First, rdfs:domain isn’t really a constraint and doesn’t mean ‘can be applied to’. It means more or less the opposite: it enables an inference rather than imposing a constraint. [P rdfs:domain C] means that if you see a triple [X P foo] then you are licensed to deduce that X must be of type C. So we can see that if we made the illegal deduction [eg:personalName rdfs:domain eg:Man] then everything we applied eg:personalName to would become an eg:Man, and we could no longer have things of type eg:Human which aren’t of type eg:Man. Whereas the correct deduction [eg:personalName rdfs:domain eg:Animal] is safe, because every eg:Human is an eg:Animal, so the domain deductions don’t tell us anything that wasn’t already true, so to speak!
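To see the inference direction concretely (eg:Rover is a made-up instance), suppose we assert:

eg:Rover eg:personalName "Rover" .

A reasoner will simply deduce [eg:Rover rdf:type eg:Human] – the domain declaration classifies the subject of the triple; it never rejects the triple.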

The second flaw is in the phrasing “is in the domain of”. It is true that eg:Man is, in some sense, “in the domain of” eg:personalName, but the correct translation of this loose phrase is that “eg:Man is a subclass of the domain of eg:personalName”, which is quite different from saying “eg:Man is the domain of eg:personalName.”